    Competition or collaboration? The reciprocity effect in loan syndication

    It is well recognized that loan syndication generates a moral hazard problem by diluting the lead arranger's incentive to monitor the borrower. This paper proposes and tests a novel view that reciprocal arrangements among lead arrangers serve as an effective mechanism for mitigating this agency problem. Lender arrangements in roughly seven out of ten syndicated loans are reciprocal, in the sense that lead arrangers also participate in loans led by their participant lenders. I develop a repeated-game model in which syndicate lenders sustain reciprocity through such arrangements, because monitoring effort enhances lead arrangers' ability to profit from participating in loans led by others. The model generates specific predictions that I then confront with the data. I find strong and consistent empirical evidence of the reciprocity effect. Controlling for lender, borrower, and loan characteristics, I show that: (i) lead arrangers retain on average 4.3% less of loans with reciprocity than of those without, (ii) the average interest spread over LIBOR on drawn funds is 11 basis points lower on loans with reciprocity, and (iii) the default probability is 4.7% lower among loans with reciprocity. These results indicate a cooperative equilibrium in loan syndication and have important implications for lending institutions, borrowing firms, and regulators.
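    A minimal sketch of the reciprocity measure the abstract describes: a loan counts as reciprocal if its lead arranger also participates in some loan led by one of that loan's participant lenders. The loan records and field names below are hypothetical illustrations, not the paper's actual dataset.

```python
# Hypothetical loan records: each loan has one lead arranger and a set of
# participant lenders. These names and fields are illustrative only.
loans = [
    {"lead": "BankA", "participants": {"BankB", "BankC"}},
    {"lead": "BankB", "participants": {"BankA"}},
    {"lead": "BankC", "participants": {"BankD"}},
]

# All observed (lead, participant) pairs across the sample.
pairs = {(loan["lead"], p) for loan in loans for p in loan["participants"]}

def is_reciprocal(loan):
    # Reciprocal if some participant of this loan leads another loan
    # in which this loan's lead arranger participates.
    return any((p, loan["lead"]) in pairs for p in loan["participants"])

for loan in loans:
    print(loan["lead"], "reciprocal:", is_reciprocal(loan))
# BankA and BankB form a reciprocal pair; BankC's loan is not reciprocal.
```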

    Guarantees of Total Variation Minimization for Signal Recovery

    In this paper, we consider using total variation (TV) minimization to recover signals whose gradients have sparse support from a small number of measurements. We establish a performance guarantee for TV minimization in recovering one-dimensional signals with sparse gradient support, which partially answers the open problem of proving the fidelity of TV minimization in this setting [TVMulti]. In particular, we show that the recoverable gradient sparsity can grow linearly with the signal dimension when TV minimization is used. Recoverable sparsity thresholds of TV minimization are explicitly computed for one-dimensional signals using the Grassmann angle framework. We also extend our results to TV minimization for multidimensional signals. Stability of recovering the signal itself using 1-D TV minimization is established through a property we call the "almost Euclidean property" of the 1-dimensional TV norm. We further give a lower bound on the number of random Gaussian measurements needed to recover one-dimensional signal vectors with $N$ elements and $K$-sparse gradients. Interestingly, the number of needed measurements is lower bounded by $\Omega((NK)^{1/2})$, rather than the $O(K \log(N/K))$ bound that frequently appears in recovering $K$-sparse signal vectors.
    Comment: lower bounds added; version with Gaussian width, improved bounds; stability results added
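    As an illustration of the recovery problem analyzed above, here is a minimal sketch of 1-D TV minimization from random Gaussian measurements. This is not the authors' code; the problem sizes and the use of the cvxpy solver are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes: signal length N, gradient sparsity K, measurements m.
N, K, m = 200, 5, 80
rng = np.random.default_rng(0)

# Piecewise-constant signal: K jump locations give a K-sparse gradient.
x_true = np.zeros(N)
for j in rng.choice(np.arange(1, N), size=K, replace=False):
    x_true[j:] += rng.normal()

# Random Gaussian measurement matrix and noiseless measurements y = A x.
A = rng.normal(size=(m, N)) / np.sqrt(m)
y = A @ x_true

# TV minimization: minimize the 1-D TV norm sum_i |x[i+1] - x[i]|
# subject to the measurement constraint A x = y.
x = cp.Variable(N)
problem = cp.Problem(cp.Minimize(cp.tv(x)), [A @ x == y])
problem.solve()

print("relative recovery error:",
      np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```

    For these sizes, $m = 80$ comfortably exceeds the $\Omega((NK)^{1/2}) \approx 32$ scale discussed in the abstract, so the program should typically recover the signal to near machine precision.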